Many real-world applications, such as gene expression clustering and collaborative filtering, can be modeled by matrix factorization. A common challenge in applying matrix factorization is determining the dimensionality of the latent matrices from data. Indian buffet processes enable us to apply nonparametric Bayesian machinery to address this challenge. Even so, learning nonparametric Bayesian matrix factorization models from data remains a difficult task. Based on equivalence classes of infinite matrices, we propose a novel variational Bayesian method as an efficient alternative to current Monte Carlo methods for learning these models. Inspired by the success of nonnegative matrix factorization on many learning problems, we also develop a new model, called infinite nonnegative matrix factorization.
For efficient inference in these new models, we develop an approximate inference method based on the power-EP framework. Experimental results show favorable performance of our method compared to Gibbs sampling and particle filtering. In addition, we demonstrate the effectiveness of the new methods on collaborative filtering and text clustering tasks.
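To make the underlying factorization concrete, the following is a minimal sketch of classical finite-rank nonnegative matrix factorization using Lee–Seung multiplicative updates. This is an illustrative baseline, not the paper's infinite NMF model or its power-EP inference: the latent dimensionality `K` is fixed by hand here, which is exactly the choice the nonparametric (IBP-based) treatment avoids.

```python
import numpy as np

def nmf(X, K, n_iter=200, eps=1e-9, seed=0):
    """Approximate X (m x n, nonnegative) as W @ H with W >= 0, H >= 0.

    Classical multiplicative-update NMF; K (the latent dimensionality)
    must be chosen in advance, unlike in the nonparametric models
    described above.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, K)) + eps
    H = rng.random((K, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates preserve nonnegativity and
        # monotonically decrease the squared reconstruction error.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage: factor a random nonnegative matrix and check reconstruction.
X = np.random.default_rng(1).random((20, 30))
W, H = nmf(X, K=5)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

In practice, the quality of the fit is sensitive to the choice of `K`, motivating models that infer the number of latent features from data.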